Exponential growth in a stochastic environment

Let us start with a few reminders about population dynamics. The basis of population dynamics is exponential growth, \(\frac{dN(t)}{dt} = rN\). You can also note that this corresponds to a constant per capita growth rate (the fitness of the average individual), \(\frac{1}{N} \frac{dN(t)}{dt} = r\). By putting the terms in \(N\) on one side and the terms in \(t\) on the other, one can show that \(N(t) = N(0) \exp(rt)\), or equivalently over one time step \(N(t+1) = N(t) \exp(r)\), often written \(N_{t+1} = N_t e^{r}\).
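The separation-of-variables step, written out:

\[ \frac{dN}{N} = r\,dt \quad\Rightarrow\quad \int_{N(0)}^{N(t)} \frac{dN'}{N'} = \int_0^t r\,ds \quad\Rightarrow\quad \log\frac{N(t)}{N(0)} = rt \quad\Rightarrow\quad N(t) = N(0)e^{rt}. \]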

What on Earth, you may ask, could this highly theoretical construct have to do with time series? Well, here you go.

Let’s denote the log-abundance \(x_t = \log(N_t)\). Now we have \(x_{t+1} = x_t + r\). As it turns out, the stochastic model with environmental stochasticity has log-normal noise, so on the log scale it is simply \(x_{t+1} = x_t + r + \epsilon_t, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2)\). This is the simplest model that can be fitted to time series data; mathematically speaking, it is a random walk with drift \(r\) (for \(r>0\), it is more likely to go up than down). An \(r\) close to zero indicates relative stability.

This model has all sorts of interesting properties due to the stochastic part, such as the fact that the probability of dropping below a low quasi-extinction threshold can be very large, even for clearly positive \(r\), if the variance is large. Let’s simulate and fit that model.

set.seed(42)
r=0.1
tmax = 50
x=x1=rep(0,tmax)
for (t in 1:(tmax-1)){x[t+1] = x[t] + r + rnorm(1,0,sqrt(0.1))}
for (t in 1:(tmax-1)){x1[t+1] = x1[t] + r + rnorm(1,0,sqrt(0.25))}
minx=min(c(x,x1))
maxx=max(c(x,x1))
par(mfrow=c(2,1),pch=20)
plot(1:tmax,x,type="o",ylim=c(minx,maxx))
lines(1:tmax,x1,type="o",col="blue")
plot(1:tmax,exp(x),type="o",ylim=c(exp(minx),exp(maxx)))
lines(1:tmax,exp(x1),type="o",col="blue")
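The claim above, that the probability of dropping to very low abundance can be large even with positive \(r\), can be checked with a quick Monte Carlo sketch. The threshold \(x_{ext} = \log(0.1)\) and the function name are illustrative choices, not part of the model.

```r
# Sketch: probability that the biased random walk drops below a
# quasi-extinction threshold within tmax steps, estimated by simulation
quasi_ext_prob = function(r, sigma, x0 = 0, x_ext = log(0.1),
                          tmax = 50, nrep = 1000){
  hits = replicate(nrep, {
    x = x0 + cumsum(rnorm(tmax - 1, mean = r, sd = sigma))
    any(x < x_ext)          # TRUE if the walk ever crosses the threshold
  })
  mean(hits)
}
set.seed(42)
quasi_ext_prob(r = 0.1, sigma = sqrt(0.1)) # moderate variance: small risk
quasi_ext_prob(r = 0.1, sigma = sqrt(1))   # same r, large variance: much larger risk
```

The same positive drift yields very different extinction risks depending on \(\sigma\), which is the point made above.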

Data in a list:

m10.data <- list(x=x, tmax = tmax)
m11.data <- list(x=x1, tmax = tmax)
data {                                             // observed variables 
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables
 
}
parameters {                                       // unobserved parameters
  real r;
  real<lower=0> sigma; 
}
model {

  //priors
  r ~ normal(0,1);
  sigma ~ exponential(10);
  
  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(x[t] + r,sigma); // likelihood 
  }

}
fit.m1.0 <- sampling(m1.1, data = m10.data, iter = 1000, chains = 2, cores = 2) # m1.1 is the compiled Stan model defined above
print(fit.m1.0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6645f4d9f663a6256ab124969a93e4fd.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.09    0.00 0.05  0.03  0.09  0.15   743    1
## sigma  0.36    0.00 0.04  0.32  0.36  0.41   804    1
## lp__  19.62    0.04 0.86 18.45 19.86 20.46   549    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:50:47 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Keep in mind that \(\sqrt{0.1} \approx 0.31\) and \(\sqrt{0.25} = 0.5\). Let’s do the same with the other dataset.

fit.m1.1 <- sampling(m1.1, data = m11.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6645f4d9f663a6256ab124969a93e4fd.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##       mean se_mean   sd  10%  50%  90% n_eff Rhat
## r     0.15    0.00 0.07 0.07 0.15 0.24   821    1
## sigma 0.46    0.00 0.05 0.40 0.45 0.52   534    1
## lp__  6.89    0.05 1.02 5.49 7.19 7.82   371    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:50:49 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

And again with two other datasets, to see if we could have some bias here (or if it is just low precision).

x2=x3=rep(0,tmax)
for (t in 1:(tmax-1)){x2[t+1] = x2[t] + r + rnorm(1,0,sqrt(0.1))}
for (t in 1:(tmax-1)){x3[t+1] = x3[t] + r + rnorm(1,0,sqrt(0.1))}

m12.data <- list(x=x2, tmax = tmax)
m13.data <- list(x=x3, tmax = tmax)

fit.m1.2<- sampling(m1.1, data = m12.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6645f4d9f663a6256ab124969a93e4fd.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.02    0.00 0.04 -0.03  0.02  0.07   715    1
## sigma  0.28    0.00 0.03  0.25  0.28  0.32   773    1
## lp__  32.80    0.05 1.13 31.30 33.14 33.75   528    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:50:51 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
fit.m1.3<- sampling(m1.1, data = m13.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6645f4d9f663a6256ab124969a93e4fd.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.10    0.00 0.05  0.05  0.10  0.16   491    1
## sigma  0.31    0.00 0.03  0.27  0.31  0.35   593    1
## lp__  28.31    0.05 1.05 26.90 28.68 29.24   412    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:50:51 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

There does not seem to be a bias on \(\sigma\). Still, uniform priors for the SD are often suggested to avoid issues. Let’s try that.

data {                                             // observed variables 
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables
 
}
parameters {                                       // unobserved parameters
  real r;
  real<lower=0> sigma; 
}
model {

  //priors
  r ~ normal(0,1);
  sigma ~ uniform(0,1);                         // 1 is a large SD; we are well below that
  
  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(x[t] + r,sigma); // likelihood 
  }

}

We now fit the model to two datasets having \(\sigma = \sqrt{0.1} \approx 0.31\) and one having \(\sigma = \sqrt{0.25} = 0.5\).

fit.m1.4<- sampling(m1.2, data = m10.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.4, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5f4974c1b181e593ae26894671ff262f.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.09    0.00 0.05  0.02  0.08  0.16   828 1.01
## sigma  0.38    0.00 0.04  0.33  0.37  0.43   832 1.00
## lp__  23.20    0.04 0.95 21.88 23.50 24.09   563 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:51:33 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
fit.m1.5<- sampling(m1.2, data = m12.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.5, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5f4974c1b181e593ae26894671ff262f.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.03    0.00 0.04 -0.03  0.03  0.08   858    1
## sigma  0.29    0.00 0.03  0.26  0.29  0.33   504    1
## lp__  35.67    0.05 0.99 34.38 35.97 36.53   415    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:51:34 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
### for the last one we take the sigma = 0.5 parameter set
fit.m1.6<- sampling(m1.2, data = m11.data, iter = 1000, chains = 2, cores = 2)
print(fit.m1.6, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5f4974c1b181e593ae26894671ff262f.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.15    0.00 0.07  0.06  0.15  0.24   607    1
## sigma  0.48    0.00 0.05  0.42  0.47  0.55   922    1
## lp__  11.47    0.05 1.00 10.07 11.76 12.39   403    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:51:34 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Let’s investigate the correlations then. All good.

mcmc_pairs(as.array(fit.m1.4), np = nuts_params(fit.m1.4),
           pars = c("r","sigma"), 
           off_diag_args = list(size = 1, alpha = 1/3),
           np_style = pairs_style_np(div_size=3, div_shape = 19))

What have we learned so far?

  • Estimating a population growth rate can seem very hard on a highly variable, unregulated 50-year time series with one point per year, if we try to recover the true parameters (!). But there is neither systematic bias nor an identifiability issue.
  • The likely cause is that, for a highly variable stochastic process, the realized long-term growth rate can differ greatly from one time series to the next. This is useful to keep in mind for any realization of a highly variable stochastic process. A formal investigation of this might systematically vary time series length.
  • Regulated populations, which we study next, tend for the most part to be a little less variable from one time series to the next (there are exceptions), but with possibly more complex relationships between parameters.
mean(m10.data$x[2:tmax]-m10.data$x[1:(tmax-1)]) #realized mean growth rate dataset m10
## [1] 0.08425807
mean(m12.data$x[2:tmax]-m12.data$x[1:(tmax-1)]) #realized mean growth rate dataset m12
## [1] 0.02281295
minx=min(c(m10.data$x,m12.data$x))
maxx=max(c(m10.data$x,m12.data$x))
par(pch=20)
plot(1:tmax,m10.data$x,type="o",ylim=c(minx,maxx),ylab="log(N)",xlab="Time")
lines(1:tmax,m12.data$x,type="o",col="blue")

Gompertz growth and the AR(1) process

You might be shocked that the next model is not logistic growth. Well, from a statistical perspective, Gompertz growth is the natural next step. The model reads

\[N_{t+1} = N_t e^{a - \beta \log(N_t) + \epsilon_t}, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2). \]

which is rigorously equivalent to

\[N_{t+1} = N_t e^{a} \frac{1}{N_t^{\beta}} \times \exp( \epsilon_t ). \]

Usually this blows people’s minds; we have to recall that \(\beta \log(N_t) = \log(N_t^\beta)\), so that \(\exp(\beta\log(N_t)) = \exp(\log(N_t^\beta)) = N_t^\beta\). One of the important ideas about this model is that \(b<1\) (usually), which mediates the effect of individuals at low vs. high densities: as population size increases, each additional individual has less of a negative effect. The model is also extraordinarily convenient statistically, because on the log scale this equation is equivalent to

\[x_{t+1} = x_t + a - \beta x_t + \epsilon_t = a + b x_t + \epsilon_t, \; b = 1 - \beta, \; \epsilon_t \sim \mathcal{N}(0,\sigma^2).\]

Convenient, isn’t it? Now we can use this versatile model to test whether population growth slows down when density increases, just by fitting a linear time series model. This is called the AR(1) model, i.e., the AutoRegressive model of order 1. It has a very famous continuous-time counterpart called the Ornstein-Uhlenbeck process, which has physical origins; you can think of it as a ball attracted to a center with a pull strength proportional to its distance from the center. In a deterministic setting, the ball converges to the center (\(x^*=a/(1-b)=a/\beta\) here). But the random perturbations (\(\epsilon_t\)) make it oscillate around the center.
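A small simulation sketch of this mean-reversion (the values of \(a\), \(b\) and the noise SD are arbitrary here): the deterministic pull drives the trajectory toward \(a/(1-b)\), and the noise makes it fluctuate around that value.

```r
# Sketch: AR(1) trajectories fluctuate around the equilibrium a/(1-b)
a = 1; b = 0.8
x_star = a / (1 - b)                       # deterministic equilibrium, here 5
set.seed(42)
x = numeric(500); x[1] = 0
for (t in 1:499){ x[t+1] = a + b*x[t] + rnorm(1, 0, 0.3) }
c(equilibrium = x_star, empirical_mean = mean(x[100:500])) # close to each other
```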

There are many applications and multivariate generalisations of this simple model in aquatic ecology, including fisheries. For much more on this and state-space models (in Stan as well), see the course by Eli Holmes, Eric Ward and Mark Scheuerell. Here we will only cover basic Gompertz growth, with only one state, the process state (no observation error).

[Simulation and stan fit – showing correlations between parameters as well.]

a=1
b=0.8 # this is 1-beta, so the "pull strength" beta = 0.2 here (and if b=1, we have a random walk)
tmax = 50
x=x1=rep(0,tmax)
for (t in 1:(tmax-1)){x[t+1] = a + b*x[t] + rnorm(1,0,sqrt(0.1))}
for (t in 1:(tmax-1)){x1[t+1] = a + b*x1[t] + rnorm(1,0,sqrt(0.25))}
minx=min(c(x,x1))
maxx=max(c(x,x1))
par(mfrow=c(2,1),pch=20)
plot(1:tmax,x,type="o",ylim=c(minx,maxx))
lines(1:tmax,x1,type="o",col="blue")
plot(1:tmax,exp(x),type="o",ylim=c(exp(minx),exp(maxx)))
lines(1:tmax,exp(x1),type="o",col="blue")

Now let’s fit that model

data {                                             // observed variables 
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables
 
}
parameters {                                       // unobserved parameters
  real a;                                          // growth rate at log(N) = 0 (i.e., N = 1), not a true maximum growth rate
  real b;
  real<lower=0> sigma; 
}
model {

  //priors
  a ~ normal(0,1);
  b ~ normal(0,1);
  sigma ~ exponential(10);
  
  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(a+b*x[t],sigma); // likelihood 
  }

}
m20.data <- list(x=x, tmax = tmax)
m21.data <- list(x=x1, tmax = tmax)
fit.m2.0 <- sampling(m2.1, data = m20.data, iter = 1000, chains = 2, cores = 2)
print(fit.m2.0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 057f3ae53d7d669d6e6006a01bb1ae37.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## a      1.33    0.01 0.18  1.09  1.33  1.56   438 1.01
## b      0.73    0.00 0.04  0.68  0.73  0.78   436 1.01
## sigma  0.27    0.00 0.03  0.24  0.27  0.31   567 1.00
## lp__  33.20    0.06 1.21 31.57 33.47 34.42   366 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:52:19 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Keep in mind that \(\sqrt{0.1} \approx 0.31\) and \(\sqrt{0.25} = 0.5\). Let’s do the same with the other dataset.

fit.m2.1 <- sampling(m2.1, data = m21.data, iter = 1000, chains = 2, cores = 2)
print(fit.m2.1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 057f3ae53d7d669d6e6006a01bb1ae37.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## a       1.82    0.02 0.37   1.35   1.84   2.27   339 1.01
## b       0.62    0.00 0.08   0.53   0.62   0.71   333 1.01
## sigma   0.64    0.00 0.06   0.57   0.64   0.72   400 1.00
## lp__  -14.22    0.07 1.24 -15.87 -13.89 -13.04   340 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:52:21 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).

Let’s investigate the correlations.

mcmc_pairs(as.array(fit.m2.0), np = nuts_params(fit.m2.0),
           pars = c("a","b","sigma"), 
           off_diag_args = list(size = 1, alpha = 1/3),
           np_style = pairs_style_np(div_size=3, div_shape = 19))

Now, one of the ugly problems of population dynamics starts rearing its head. There is a correlation between \(a\) and \(b\) (not a dramatic one here, but annoying). It is not just something peculiar to that dataset; see the other one.

mcmc_pairs(as.array(fit.m2.1), np = nuts_params(fit.m2.1),
           pars = c("a","b","sigma"), 
           off_diag_args = list(size = 1, alpha = 1/3),
           np_style = pairs_style_np(div_size=3, div_shape = 19))

So, how do we fix this? Most fixes involve some kind of re-parameterization or an informative prior (or, in some cases, adding covariates to parameters). In this particular case, there is a very easy fix. We often use this model close to equilibrium, so we can use the transformation \(\tilde{x} = x - \bar{x}\) to study only the deviations of log-abundance from the mean log-abundance. This transforms the model into

\[ \tilde{x}_{t+1} = b \tilde{x}_t + \epsilon_t, \, \epsilon_t \sim \mathcal{N}(0,\sigma^2).\]

Parameter \(a\) is then eliminated from the centered version of the model and we get rid of the problem.
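As a quick sanity check of the centered version (a sketch; `lm()` through the origin plays the role of the Stan fit here, and the parameter values are arbitrary):

```r
# Sketch: after centering, a drops out and b is the slope of a
# regression through the origin of x[t+1] on x[t]
set.seed(42)
a = 1; b = 0.8; tmax = 500
x = numeric(tmax)
for (t in 1:(tmax-1)){ x[t+1] = a + b*x[t] + rnorm(1, 0, 0.3) }
xc = x - mean(x)                               # centered log-abundance
coef(lm(xc[2:tmax] ~ xc[1:(tmax-1)] - 1))      # close to the true b = 0.8
```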

[Adding covariates on the population growth rate – may solve some identifiability problems on \(a\)]

Logistic growth

Now we come to this revered, almost two-century-old model. In continuous time, \(\frac{dN(t)}{dt} = rN(1-\frac{N}{K}) = rN- \gamma N^2\). It says that individual fitness declines linearly with population size (by contrast, in the Gompertz model it declines logarithmically with population size). We could fit this model directly as an ODE. In fact, unlike many other models, this one can be explicitly integrated (using some algebra), and one can derive that

\[ N(t) = \frac{ N(0) e^{rt}}{1+ \frac{N(0)}{K} (e^{rt}-1) }\]

or equivalently

\[ N(t+1) = \frac{ N(t) e^{r}}{1+ \frac{N(t)}{K} (e^{r}-1) }\]

which is usually denoted

\[ N_{t+1} = \frac{ N_t R}{1+\alpha N_t }\] with the correspondence \(R= e^r\) and competition coefficient \(\alpha = \frac{e^{r}-1}{K}\) (instead of \(\gamma = \frac{r}{K}\) in continuous time). The discrete-time form is usually preferred, because most data we have access to are sampled from year to year, or at some other regular time interval.
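As a consistency check, the fixed point of the discrete map is indeed \(K\):

\[ N^* = \frac{N^* R}{1+\alpha N^*} \;\Rightarrow\; 1+\alpha N^* = R \;\Rightarrow\; N^* = \frac{R-1}{\alpha} = \frac{e^r-1}{(e^r-1)/K} = K. \]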

This model is rather popular in plant ecology nowadays, although usually called Beverton-Holt because of the origin of this functional form in fisheries science. We can introduce environmental stochasticity on the growth rate:

\[ N_{t+1} = \frac{ N_t e^{r+\epsilon_t}}{1+\alpha N_t }.\] Sometimes people write

\[ N_{t+1} = \frac{ N_t e^{r+\epsilon_t}}{1+ \frac{N_t}{K'} }\] but \(K'\) is a “false carrying capacity” here, in the sense that the true equilibrium population size is \(N^* = K'(e^r-1)\), which can be much larger than \(K'\) for large \(r\).
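Iterating the deterministic map confirms this (a sketch using the same \(K'=10\) and \(r=2\) values as in the simulations below):

```r
# Sketch: the equilibrium of N[t+1] = N[t]*exp(r)/(1 + N[t]/Kprim)
# is Kprim*(exp(r)-1), not Kprim itself
r = 2; Kprim = 10
N = 1
for (t in 1:200){ N = N * exp(r) / (1 + N / Kprim) }
c(predicted = Kprim * (exp(r) - 1), iterated = N) # both approx. 63.9, far above Kprim = 10
```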

Let’s now simulate that model and fit it, with different parameterizations. We will try

  • \((r,\alpha)\)
  • \((r,K)\)
  • \((r,K')\)

We consider two values of \(r\) (0.1 and 2) and two levels of \(\sigma^2\), 0.05 and 0.5.

set.seed(42)

alpha=0.1
Kprim=1/alpha #Kprim=10
tmax=50

####Let's start with r=0.1
r=0.1
R=exp(r) #1.0105
K=(exp(r)-1)/alpha #K=1.05

N_BH=N_BH1=rep(NA,tmax)
N_BH[1]=N_BH1[1]=1
for (t in 1:(tmax-1)){N_BH[t+1] = (exp(r+rnorm(1,0,sqrt(0.05))))*N_BH[t]/(1+alpha*N_BH[t])}
for (t in 1:(tmax-1)){N_BH1[t+1] = (exp(r+rnorm(1,0,sqrt(0.5))))*N_BH1[t]/(1+alpha*N_BH1[t])}

###

par(mfrow=c(1,2),pch=20)
plot(1:tmax,N_BH,type="o",ylim=range(c(N_BH,N_BH1)))
lines(1:tmax,N_BH1,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")


####And go on with r=2
r=2
R=exp(r) #7.4
K=(exp(r)-1)/alpha #K=63.9

N_BH.2=N_BH1.2=rep(NA,tmax)
N_BH.2[1]=N_BH1.2[1]=1
for (t in 1:(tmax-1)){N_BH.2[t+1] = (exp(r+rnorm(1,0,sqrt(0.05))))*N_BH.2[t]/(1+alpha*N_BH.2[t])}
for (t in 1:(tmax-1)){N_BH1.2[t+1] = (exp(r+rnorm(1,0,sqrt(0.5))))*N_BH1.2[t]/(1+alpha*N_BH1.2[t])}

###

plot(1:tmax,N_BH.2,type="o",ylim=range(c(N_BH.2,N_BH1.2)))
lines(1:tmax,N_BH1.2,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")

m30.data <- list(x=log(N_BH), tmax = tmax)
m31.data <- list(x=log(N_BH1), tmax = tmax)
m30.2.data <- list(x=log(N_BH.2), tmax = tmax)
m31.2.data <- list(x=log(N_BH1.2), tmax = tmax)

store_results=matrix(NA,nrow=12,ncol=5,dimnames=list(c("d0.m1","d1.m1","d0.2.m1","d1.2.m1","d0.m2","d1.m2","d0.2.m2","d1.2.m2","d0.m3","d1.m3","d0.2.m3","d1.2.m3"),c("r","alpha","sigma","cor(r,DD)","likelihood")))

We start with an \((r,\alpha)\) parameterization.

data {                                             // observed variables
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variable

}
parameters {                                       // unobserved parameters
  real r;                                          // growth rate
  real<lower=0> alpha;                             // density-dependence
  real<lower=0> sigma;
}
model {

  //priors
  r ~ normal(0,1);
  alpha ~ exponential(10);
  sigma ~ exponential(10);

  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(r+x[t]-log(1+alpha*exp(x[t])),sigma);
  }
}

How do the four datasets fare with this structure of the model?

fit.m3.1_0 <- sampling(m3.1, data = m30.data, iter = 1000, chains = 2, cores = 2)
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m3.1_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 1176d3ca877c6925f754a4cf44f3db44.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.06    0.00 0.06 -0.02  0.05  0.13   329 1.00
## alpha  0.07    0.00 0.05  0.01  0.06  0.13   336 1.00
## sigma  0.26    0.00 0.03  0.23  0.26  0.30   309 1.01
## lp__  33.23    0.13 1.51 31.08 33.63 34.66   146 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:04 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.1_0)
store_results["d0.m1","r"]=mean(tmp$r)
store_results["d0.m1","alpha"]=mean(tmp$alpha)
store_results["d0.m1","sigma"]=mean(tmp$sigma)
store_results["d0.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d0.m1","likelihood"]=mean(tmp$lp__)

Let’s use the inferred parameters for another simulation and compare to the actual dataset.

alpha_simu=mean(as.matrix(fit.m3.1_0,pars="alpha"))
r_simu=mean(as.matrix(fit.m3.1_0,pars="r"))
sigma_simu=mean(as.matrix(fit.m3.1_0,pars="sigma"))
N_simu=rep(NA,tmax)
N_simu[1]=1
for (t in 1:(tmax-1)){N_simu[t+1] = (exp(r_simu+rnorm(1,0,sigma_simu)))*N_simu[t]/(1+alpha_simu*N_simu[t])}

plot(1:tmax,N_simu,t="o",xlab="Time",ylim=range(c(N_simu,N_BH)))
lines(1:tmax,N_BH,col="blue")

mcmc_pairs(as.array(fit.m3.1_0), np = nuts_params(fit.m3.1_0),
           pars = c("r","alpha","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m3.1_1 <- sampling(m3.1, data = m31.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.1_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 1176d3ca877c6925f754a4cf44f3db44.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.27    0.01 0.16   0.07   0.26   0.48   426    1
## alpha   0.17    0.00 0.09   0.06   0.15   0.29   412    1
## sigma   0.63    0.00 0.06   0.56   0.63   0.71   459    1
## lp__  -14.98    0.08 1.36 -16.79 -14.64 -13.65   260    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:07 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.1_1)
store_results["d1.m1","r"]=mean(tmp$r)
store_results["d1.m1","alpha"]=mean(tmp$alpha)
store_results["d1.m1","sigma"]=mean(tmp$sigma)
store_results["d1.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d1.m1","likelihood"]=mean(tmp$lp__)
alpha_simu=mean(as.matrix(fit.m3.1_1,pars="alpha"))
r_simu=mean(as.matrix(fit.m3.1_1,pars="r"))
sigma_simu=mean(as.matrix(fit.m3.1_1,pars="sigma"))
N_simu=rep(NA,tmax)
N_simu[1]=1
for (t in 1:(tmax-1)){N_simu[t+1] = (exp(r_simu+rnorm(1,0,sigma_simu)))*N_simu[t]/(1+alpha_simu*N_simu[t])}

plot(1:tmax,N_simu,t="o",xlab="Time",ylim=range(c(N_simu,N_BH1)))
lines(1:tmax,N_BH1,col="blue")

mcmc_pairs(as.array(fit.m3.1_1), np = nuts_params(fit.m3.1_1),
           pars = c("r","alpha","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m3.1_0.2 <- sampling(m3.1, data = m30.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.1_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 1176d3ca877c6925f754a4cf44f3db44.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      2.04    0.01 0.18  1.80  2.04  2.27   346 1.00
## alpha  0.11    0.00 0.02  0.08  0.11  0.14   340 1.00
## sigma  0.21    0.00 0.02  0.18  0.21  0.24   306 1.01
## lp__  42.83    0.07 1.22 41.21 43.16 44.06   317 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:11 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.1_0.2)
store_results["d0.2.m1","r"]=mean(tmp$r)
store_results["d0.2.m1","alpha"]=mean(tmp$alpha)
store_results["d0.2.m1","sigma"]=mean(tmp$sigma)
store_results["d0.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d0.2.m1","likelihood"]=mean(tmp$lp__)
fit.m3.1_1.2 <- sampling(m3.1, data = m31.2.data, iter = 1000, chains = 2, cores = 2)# 2 divergent transitions, leaving that here but checking pairs
print(fit.m3.1_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 1176d3ca877c6925f754a4cf44f3db44.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       2.01    0.02 0.35   1.58   1.99   2.47   271 1.01
## alpha   0.12    0.00 0.05   0.07   0.11   0.19   298 1.01
## sigma   0.59    0.00 0.06   0.52   0.59   0.67   433 1.01
## lp__  -12.97    0.07 1.26 -14.56 -12.64 -11.74   356 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:13 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.1_1.2)
store_results["d1.2.m1","r"]=mean(tmp$r)
store_results["d1.2.m1","alpha"]=mean(tmp$alpha)
store_results["d1.2.m1","sigma"]=mean(tmp$sigma)
store_results["d1.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
store_results["d1.2.m1","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.1_1.2), np = nuts_params(fit.m3.1_1.2),
           pars = c("r","alpha","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

Let’s now use the \((r,K)\) parameterization.

data {                                             // observed variables
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables

}
parameters {                                       // unobserved parameters
  real r;                                          // growth rate
  real<lower=0> K;                                 // carrying capacity
  real<lower=0> sigma;
}
model {

  //priors
  r ~ normal(0,1);
  K ~ exponential(1);                             //exponential(10) not working well
  sigma ~ exponential(10);

  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(r+x[t]-log(1+(exp(r)-1)*exp(x[t])/K),sigma);
  }
}
fit.m3.2_0 <- sampling(m3.2, data = m30.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #65 divergent transitions without correcting adapt_delta
print(fit.m3.2_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: fb8ce8fe68c8f79db1336ea540c32cac.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.06    0.00 0.07 -0.01  0.06  0.15   386 1.01
## K      1.36    0.04 0.84  0.57  1.16  2.41   413 1.00
## sigma  0.26    0.00 0.03  0.23  0.26  0.29   456 1.00
## lp__  35.72    0.07 1.23 34.17 36.01 36.95   295 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:56 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.2_0)
store_results["d0.m2","r"]=mean(tmp$r)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K # alpha = (e^r - 1)/K
store_results["d0.m2","alpha"]=alpha
store_results["d0.m2","sigma"]=mean(tmp$sigma)
store_results["d0.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d0.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_0), np = nuts_params(fit.m3.2_0),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m3.2_1 <- sampling(m3.2, data = m31.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #97 divergent transitions  without correcting adapt_delta
## Warning: There were 2 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m3.2_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: fb8ce8fe68c8f79db1336ea540c32cac.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.34    0.01 0.19   0.09   0.33   0.60   289 1.01
## K       1.78    0.03 0.61   1.00   1.77   2.59   373 1.00
## sigma   0.63    0.00 0.06   0.56   0.63   0.71   459 1.00
## lp__  -12.57    0.06 1.12 -14.05 -12.32 -11.39   340 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:53:59 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.2_1)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K # alpha = (e^r - 1)/K
store_results["d1.m2","r"]=mean(tmp$r)
store_results["d1.m2","alpha"]=alpha
store_results["d1.m2","sigma"]=mean(tmp$sigma)
store_results["d1.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d1.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_1), np = nuts_params(fit.m3.2_1),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m3.2_0.2 <- sampling(m3.2, data = m30.2.data, iter = 1000, chains = 2, cores = 2,control=list(adapt_delta=0.999)) #92 divergent transitions without correcting adapt_delta
## Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
print(fit.m3.2_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: fb8ce8fe68c8f79db1336ea540c32cac.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##       mean se_mean   sd  10%  50%  90% n_eff Rhat
## r     0.00    0.00 0.00 0.00 0.00 0.01   266    1
## K     2.01    0.10 1.39 0.51 1.66 3.98   186    1
## sigma 0.47    0.00 0.05 0.40 0.46 0.53   308    1
## lp__  5.01    0.08 1.37 3.06 5.38 6.37   271    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:04 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.2_0.2)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K # conversion alpha = (e^r - 1)/K for this parameterization
store_results["d0.2.m2","r"]=mean(tmp$r)
store_results["d0.2.m2","alpha"]=alpha
store_results["d0.2.m2","sigma"]=mean(tmp$sigma)
store_results["d0.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d0.2.m2","likelihood"]=mean(tmp$lp__)
fit.m3.2_1.2 <- sampling(m3.2, data = m31.2.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999)) #128 divergent transitions without correcting adapt_delta
## Warning: There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m3.2_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: fb8ce8fe68c8f79db1336ea540c32cac.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.01    0.00 0.01   0.00   0.01   0.02   182 1.01
## K       2.19    0.10 1.53   0.54   1.86   4.24   245 1.01
## sigma   0.84    0.00 0.08   0.75   0.84   0.94   303 1.00
## lp__  -30.25    0.08 1.25 -32.04 -29.92 -29.01   242 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:07 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.2_1.2)
K=mean(tmp$K)
alpha=(exp(mean(tmp$r))-1)/K # conversion alpha = (e^r - 1)/K for this parameterization
store_results["d1.2.m2","r"]=mean(tmp$r)
store_results["d1.2.m2","alpha"]=alpha
store_results["d1.2.m2","sigma"]=mean(tmp$sigma)
store_results["d1.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
store_results["d1.2.m2","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.2_1.2), np = nuts_params(fit.m3.2_1.2),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

And finally, the \((r,K')\) parameterization.

data {                                             // observed variables
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables

}
parameters {                                       // unobserved parameters
  real r;                                          // growth rate
  real<lower=0> Kprim;                            // proxy for carrying capacity
  real<lower=0> sigma;
}
model {

  //priors
  r ~ normal(0,1);
  Kprim ~ exponential(10);
  sigma ~ exponential(10);

  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(r+x[t]-log(1+exp(x[t])/Kprim),sigma);
  }
}
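
On the natural scale, the mean of this model is \(N_{t+1} = N_t e^r/(1+N_t/K')\), a Beverton-Holt-type map: its equilibrium is \(K'(e^r-1)\), and \(1/K'\) plays the role of \(\alpha\), which is how the density-dependence is recovered from Kprim in the extraction code. A quick numerical sketch of the equilibrium:

```r
# Deterministic (r,K') map N[t+1] = N[t]*exp(r)/(1 + N[t]/Kprim):
# its equilibrium should be N* = Kprim*(exp(r)-1)
r <- 0.5; Kprim <- 2
N <- 1
for (t in 1:1000) N <- N * exp(r) / (1 + N / Kprim)  # iterate the map
c(simulated = N, predicted = Kprim * (exp(r) - 1))   # both close to 1.297
```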
fit.m3.3_0 <- sampling(m3.3, data = m30.data, iter = 1000, chains = 2, cores = 2)
print(fit.m3.3_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6827f6d5b16b71763ea96c5047d7c277.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##       mean se_mean   sd  10%  50%   90% n_eff Rhat
## r     0.53    0.01 0.10 0.41 0.53  0.66   328    1
## Kprim 1.46    0.02 0.32 1.06 1.43  1.89   397    1
## sigma 0.33    0.00 0.04 0.28 0.33  0.38   420    1
## lp__  9.69    0.06 1.11 8.29 9.94 10.81   406    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:49 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.3_0)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d0.m3","r"]=mean(tmp$r)
store_results["d0.m3","alpha"]=alpha
store_results["d0.m3","sigma"]=mean(tmp$sigma)
store_results["d0.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d0.m3","likelihood"]=mean(tmp$lp__)
fit.m3.3_1 <- sampling(m3.3, data = m31.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.3_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6827f6d5b16b71763ea96c5047d7c277.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       1.36    0.01 0.26   1.03   1.35   1.71   368    1
## Kprim   0.67    0.01 0.23   0.39   0.64   0.98   438    1
## sigma   0.70    0.00 0.07   0.61   0.69   0.79   286    1
## lp__  -25.32    0.07 1.23 -26.93 -25.00 -24.10   296    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:51 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.3_1)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d1.m3","r"]=mean(tmp$r)
store_results["d1.m3","alpha"]=alpha
store_results["d1.m3","sigma"]=mean(tmp$sigma)
store_results["d1.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d1.m3","likelihood"]=mean(tmp$lp__)
fit.m3.3_0.2 <- sampling(m3.3, data = m30.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
print(fit.m3.3_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6827f6d5b16b71763ea96c5047d7c277.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd  10%   50%   90% n_eff Rhat
## r      3.76    0.02 0.29 3.42  3.75  4.11   191 1.01
## Kprim  1.46    0.03 0.42 0.96  1.41  1.99   189 1.01
## sigma  0.28    0.00 0.03 0.24  0.28  0.32   287 1.01
## lp__  11.29    0.06 1.18 9.87 11.50 12.49   373 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:53 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.3_0.2)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d0.2.m3","r"]=mean(tmp$r)
store_results["d0.2.m3","alpha"]=alpha
store_results["d0.2.m3","sigma"]=mean(tmp$sigma)
store_results["d0.2.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d0.2.m3","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.3_0.2), np = nuts_params(fit.m3.3_0.2),
           pars = c("r","Kprim","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m3.3_1.2 <- sampling(m3.3, data = m31.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m3.3_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 6827f6d5b16b71763ea96c5047d7c277.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       4.37    0.02 0.36   3.91   4.36   4.81   405    1
## Kprim   0.72    0.01 0.26   0.43   0.67   1.07   414    1
## sigma   0.62    0.00 0.06   0.54   0.62   0.71   269    1
## lp__  -27.66    0.07 1.19 -29.29 -27.37 -26.44   278    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:54:57 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m3.3_1.2)
Kprim=mean(tmp$Kprim)
alpha=1/Kprim
store_results["d1.2.m3","r"]=mean(tmp$r)
store_results["d1.2.m3","alpha"]=alpha
store_results["d1.2.m3","sigma"]=mean(tmp$sigma)
store_results["d1.2.m3","cor(r,DD)"]=cor(tmp$r,tmp$Kprim)
store_results["d1.2.m3","likelihood"]=mean(tmp$lp__)
mcmc_pairs(as.array(fit.m3.3_1.2), np = nuts_params(fit.m3.3_1.2),
           pars = c("r","Kprim","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

Keep in mind the true values: \(r=0.1\) (\(r=2\) for the “.2” datasets), \(\alpha=0.1\), \(\sigma_0=0.22\) and \(\sigma_1=0.71\).

print(store_results)
##                   r      alpha     sigma  cor(r,DD) likelihood
## d0.m1   0.055784621 0.06697891 0.2609523  0.7753685  33.229131
## d1.m1   0.267700508 0.16548207 0.6312804  0.8077060 -14.975394
## d0.2.m1 2.036197303 0.11026445 0.2093074  0.9761566  42.827319
## d1.2.m1 2.007363845 0.12064842 0.5913581  0.9358252 -12.974754
## d0.m2   0.061523903 0.28746859 0.2603212 -0.2257367  35.720268
## d1.m2   0.341065877 0.29012458 0.6331161  0.5173358 -12.567976
## d0.2.m2 0.001697280 0.18359942 0.4658549  0.4713314   5.013017
## d1.2.m2 0.009831663 0.16931747 0.8408883  0.7924349 -30.254629
## d0.m3   0.530557618 0.68726480 0.3343254 -0.8676578   9.687652
## d1.m3   1.358301483 1.48447967 0.6967958 -0.8886456 -25.322274
## d0.2.m3 3.758413237 0.68304212 0.2822571 -0.9654904  11.293404
## d1.2.m3 4.368329562 1.39374197 0.6249068 -0.9387567 -27.660457

Fairly different qualities of fit and correlations between parameters. [Note FB: from previous fits, \((r,K)\) could do a bit better in some cases than \((r,K')\) and perhaps \((r,\alpha)\) in terms of reducing the \((r,K)\) correlation, but the estimation of the density-dependence was worse with \((r,K)\) here.]

Other forms of discrete-time near-logistic growth

We will consider here the Ricker model, under two parameterizations.

Back to the basics of logistic growth in discrete time. Unlike what you will find in a number of textbooks, equations like \(N_{t+1} = r' N_t (1 - N_t/K)\) are poor analogues of logistic growth (this one is called the logistic map, and although it has become famous for permitting all kinds of dynamical behaviour, including chaos, it has the poor taste of also allowing negative values).
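
To see the problem concretely, here is a minimal sketch: whenever \(N_t\) exceeds \(K\), the logistic map sends the population below zero.

```r
# The logistic map N[t+1] = r' * N[t] * (1 - N[t]/K) can go negative:
# any N[t] > K makes the factor (1 - N[t]/K) negative
rprime <- 3.9; K <- 1
N <- 1.1 * K                         # start just above carrying capacity
N_next <- rprime * N * (1 - N / K)
N_next                               # about -0.43: a negative abundance
```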

A slightly better-behaved model, though still quite far from the original logistic growth, is the Ricker model \(N_{t+1} = N_t \exp(r(1 - N_t/K))\). The Ricker model has the good taste of allowing only positive values, and it can be derived from mechanistic individual-based models as well. There are several such derivations (see Kisdi and Geritz, and Brännström and Sumpter), but a very simple one comes from fish. Let’s say that fish produce larvae. The production process is a stochastic exponential growth \(e^{r+\epsilon_t}\). Each larva then survives with probability \(\exp(-\alpha N_t)\), which declines exponentially with \(N_t\), reflecting that the more fish are already around eating larvae, the lower a larva’s chances of survival. Now we have \(N_{t+1} = N_t e^{r+\epsilon_t -\alpha N_t}\), which is the same Ricker model with \(\alpha = r/K\). You can think of the Ricker model as a logistic model with a delay in regulation that is a consequence of the discretization of time.
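
As a quick sanity check on that parameterization (a sketch): with \(\alpha = r/K\), the deterministic Ricker map should settle at \(K = r/\alpha\) when \(r\) is small enough for the equilibrium to be stable.

```r
# Deterministic Ricker N[t+1] = N[t]*exp(r - alpha*N[t]) has equilibrium N* = r/alpha
r <- 0.1; alpha <- 0.1
N <- 5                                   # start away from equilibrium
for (t in 1:500) N <- N * exp(r - alpha * N)
c(simulated = N, predicted = r / alpha)  # both close to K = 1
```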

Here we will also try the two different parameterizations, but we will vary \(r\) some more (adding \(r=3.5\)) to make the model exhibit more varied dynamics.

set.seed(42)

alpha=0.1
tmax=50

####Let's start with r=0.1
r=0.1
K=r/alpha #K=1.

N_R=N_R1=rep(NA,tmax)
N_R[1]=N_R1[1]=1
for (t in 1:(tmax-1)){N_R[t+1] = (N_R[t]*exp(r-alpha*N_R[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1[t+1] = (N_R1[t]*exp(r-alpha*N_R1[t]+rnorm(1,0,sqrt(0.5))))}

###

par(mfrow=c(3,1),pch=20)
plot(1:tmax,N_R,type="o",ylim=range(c(N_R,N_R1)),xlab="Time",main="r=0.1")
lines(1:tmax,N_R1,type="o",col="blue")
legend("topleft",c(expression(paste(sigma^"2","=0.05",sep="")),expression(paste(sigma^"2","=0.5",sep=""))),col=c("black","blue"),lty=1,pch=16,bty="n")


####And go on with r=2
r=2
K=r/alpha #K=20

N_R.2=N_R1.2=rep(NA,tmax)
N_R.2[1]=N_R1.2[1]=1
for (t in 1:(tmax-1)){N_R.2[t+1] = (N_R.2[t]*exp(r-alpha*N_R.2[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1.2[t+1] = (N_R1.2[t]*exp(r-alpha*N_R1.2[t]+rnorm(1,0,sqrt(0.5))))}

###

plot(1:tmax,N_R.2,type="o",ylim=range(c(N_R.2,N_R1.2)),xlab="Time",main="r=2")
lines(1:tmax,N_R1.2,type="o",col="blue")

#And r=3.5
r=3.5
K=r/alpha #K=35

N_R.3=N_R1.3=rep(NA,tmax)
N_R.3[1]=N_R1.3[1]=1
for (t in 1:(tmax-1)){N_R.3[t+1] = (N_R.3[t]*exp(r-alpha*N_R.3[t]+rnorm(1,0,sqrt(0.05))))}
for (t in 1:(tmax-1)){N_R1.3[t+1] = (N_R1.3[t]*exp(r-alpha*N_R1.3[t]+rnorm(1,0,sqrt(0.5))))}

###

plot(1:tmax,N_R.3,type="o",ylim=range(c(N_R.3,N_R1.3)),xlab="Time",main="r=3.5")
lines(1:tmax,N_R1.3,type="o",col="blue")

m40.data <- list(x=log(N_R), tmax = tmax)
m41.data <- list(x=log(N_R1), tmax = tmax)
m40.2.data <- list(x=log(N_R.2), tmax = tmax)
m41.2.data <- list(x=log(N_R1.2), tmax = tmax)
m40.3.data <- list(x=log(N_R.3), tmax = tmax)
m41.3.data <- list(x=log(N_R1.3), tmax = tmax)

store_results=matrix(NA,nrow=12,ncol=5,dimnames=list(c("d0.m1","d1.m1","d0.2.m1","d1.2.m1","d0.3.m1","d1.3.m1","d0.m2","d1.m2","d0.2.m2","d1.2.m2","d0.3.m2","d1.3.m2"),c("r","alpha","sigma","cor(r,DD)","likelihood")))

We first try the \((r,\alpha)\) model.

data {                                             // observed variables
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables

}
parameters {                                       // unobserved parameters
  real r;                                          // growth rate
  real<lower=0> alpha;                            // density-dependence
  real<lower=0> sigma;
}
model {

  //priors
  r ~ normal(0,1);
  alpha ~ exponential(10);
  sigma ~ exponential(10);

  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(r+x[t]-alpha*exp(x[t]),sigma);
  }
}
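
The fits below assume the program above has already been compiled into a stanmodel object named m4.1; a minimal sketch of that step with rstan (the model-code string simply repeats the program above):

```r
library(rstan)

# Stan program from above, stored as a string and compiled once
ricker_alpha_code <- "
data {
  int<lower=1> tmax;
  vector[tmax] x;
}
parameters {
  real r;
  real<lower=0> alpha;
  real<lower=0> sigma;
}
model {
  r ~ normal(0,1);
  alpha ~ exponential(10);
  sigma ~ exponential(10);
  for (t in 1:(tmax-1))
    x[t+1] ~ normal(r + x[t] - alpha*exp(x[t]), sigma);
}
"
m4.1 <- stan_model(model_code = ricker_alpha_code)  # C++ compilation, takes a minute
```

The other models (m3.2, m3.3, m4.2) are compiled the same way from their respective program blocks.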
fit.m4.1_0 <- sampling(m4.1, data = m40.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.05    0.00 0.06 -0.02  0.05  0.12   333 1.00
## alpha  0.06    0.00 0.04  0.01  0.06  0.12   289 1.00
## sigma  0.26    0.00 0.03  0.23  0.26  0.29   597 1.00
## lp__  33.31    0.07 1.33 31.60 33.61 34.67   333 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:55:39 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0)
tmp_mean=lapply(tmp,mean)
store_results["d0.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1 <- sampling(m4.1, data = m41.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1 , probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.24    0.01 0.14   0.07   0.24   0.41   389 1.01
## alpha   0.13    0.00 0.05   0.07   0.13   0.20   382 1.01
## sigma   0.64    0.00 0.06   0.57   0.63   0.72   306 1.01
## lp__  -14.89    0.07 1.29 -16.70 -14.55 -13.62   341 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:32:51 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1)
tmp_mean=lapply(tmp,mean)
store_results["d1.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_0.2 <- sampling(m4.1, data = m40.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0.2 , probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      2.08    0.00 0.08  1.98  2.08  2.18   352 1.00
## alpha  0.11    0.00 0.00  0.10  0.11  0.11   307 1.00
## sigma  0.20    0.00 0.02  0.18  0.20  0.23   276 1.01
## lp__  44.13    0.08 1.39 42.41 44.47 45.48   304 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:32:53 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0.2)
tmp_mean=lapply(tmp,mean)
store_results["d0.2.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1.2 <- sampling(m4.1, data = m41.2.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       1.86    0.01 0.13   1.69   1.86   2.04   583 1.00
## alpha   0.10    0.00 0.01   0.09   0.10   0.10   502 1.00
## sigma   0.60    0.00 0.06   0.53   0.60   0.67   725 1.00
## lp__  -12.93    0.07 1.28 -14.70 -12.56 -11.62   368 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:32:55 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1.2)
tmp_mean=lapply(tmp,mean)
store_results["d1.2.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.2.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_0.3 <- sampling(m4.1, data = m40.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_0.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      3.50    0.00 0.04  3.45  3.50  3.55   716 1.00
## alpha  0.10    0.00 0.00  0.10  0.10  0.10   624 1.00
## sigma  0.22    0.00 0.02  0.20  0.22  0.25   401 1.00
## lp__  36.22    0.08 1.36 34.51 36.56 37.57   315 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:32:57 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_0.3)
tmp_mean=lapply(tmp,mean)
store_results["d0.3.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d0.3.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)
fit.m4.1_1.3 <- sampling(m4.1, data = m41.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
print(fit.m4.1_1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 62862889de95b702ccb13d6a1a90f804.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       3.49    0.00 0.12   3.33   3.49   3.65   627 1.00
## alpha   0.10    0.00 0.00   0.10   0.10   0.10   708 1.01
## sigma   0.74    0.00 0.07   0.66   0.74   0.84   637 1.00
## lp__  -30.16    0.06 1.27 -31.73 -29.82 -28.90   459 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:32:59 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.1_1.3)
tmp_mean=lapply(tmp,mean)
store_results["d1.3.m1",c("r","alpha","sigma","likelihood")]=unlist(tmp_mean)
store_results["d1.3.m1","cor(r,DD)"]=cor(tmp$r,tmp$alpha)

Now we can consider the \((r,K)\) model.

data {                                             // observed variables
  int<lower=1> tmax;                               // number of observations
  vector[tmax] x;                                  // state variables

}
parameters {                                       // unobserved parameters
  real r;                                          // growth rate
  real<lower=0> K;                                // density-dependence
  real<lower=0> sigma;
}
model {

  //priors
  r ~ normal(0,1);
  K ~ exponential(10);
  sigma ~ exponential(10);

  for (t in 1:(tmax-1)){
  x[t+1] ~ normal(x[t]+r*(1-exp(x[t])/K),sigma);
  }
}
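
The \((r,K)\) and \((r,\alpha)\) forms are the same Ricker model with \(\alpha = r/K\), which is why the extraction code below converts the posterior means via tmp_mean$r/tmp_mean$K. A quick sketch checking that the two mean functions coincide:

```r
# The two Ricker mean functions coincide when alpha = r/K:
# r + x - alpha*exp(x)  ==  x + r*(1 - exp(x)/K)
r <- 2; K <- 20; alpha <- r / K
x <- log(seq(0.5, 30, by = 0.5))   # a grid of log-abundances
mu1 <- r + x - alpha * exp(x)      # (r, alpha) form
mu2 <- x + r * (1 - exp(x) / K)    # (r, K) form
all.equal(mu1, mu2)                # TRUE
```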
fit.m4.2_0 <- sampling(m4.2, data = m40.data, iter = 1000, chains = 2, cores = 2,control=list(adapt_delta=0.999))
## Warning: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_0, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##        mean se_mean   sd   10%   50%   90% n_eff Rhat
## r      0.01    0.00 0.01  0.00  0.01  0.03    98 1.02
## K      0.23    0.02 0.17  0.06  0.19  0.46    95 1.02
## sigma  0.26    0.00 0.03  0.23  0.26  0.30   326 1.00
## lp__  32.90    0.16 1.49 30.87 33.29 34.28    89 1.02
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:33:41 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0)
tmp_mean=lapply(tmp,mean)
store_results["d0.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
fit.m4.2_1 <- sampling(m4.2, data = m41.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999))
## Warning: There were 5 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m4.2_1, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.02    0.00 0.02   0.00   0.01   0.04   259 1.00
## K       0.23    0.01 0.16   0.06   0.19   0.43   260 1.01
## sigma   0.65    0.00 0.06   0.57   0.65   0.73   405 1.01
## lp__  -16.76    0.07 1.21 -18.44 -16.44 -15.52   282 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 09:33:43 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1)
tmp_mean=lapply(tmp,mean)
store_results["d1.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_1), np = nuts_params(fit.m4.2_1),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m4.2_0.2 <- sampling(m4.2, data = m40.2.data, iter = 10000, chains = 2, cores = 2,control=list(adapt_delta=0.999))
## Warning: There were 24 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
print(fit.m4.2_0.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=10000; warmup=5000; thin=1; 
## post-warmup draws per chain=5000, total post-warmup draws=10000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.00    0.00 0.00   0.00   0.00   0.01  1986    1
## K       0.22    0.00 0.15   0.06   0.18   0.42  1970    1
## sigma   0.81    0.00 0.08   0.72   0.81   0.91  3573    1
## lp__  -29.99    0.03 1.33 -31.84 -29.63 -28.69  2106    1
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:56:43 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0.2)
tmp_mean=lapply(tmp,mean)
store_results["d0.2.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_0.2), np = nuts_params(fit.m4.2_0.2),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

fit.m4.2_1.2 <- sampling(m4.2, data = m41.2.data, iter = 1000, chains = 2, cores = 2, control=list(adapt_delta=0.999))
## Warning: There were 3 divergent transitions after warmup. Increasing adapt_delta above 0.999 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_1.2, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##         mean se_mean   sd    10%    50%    90% n_eff Rhat
## r       0.01    0.00 0.01   0.00   0.01   0.02   143 1.00
## K       0.23    0.01 0.17   0.06   0.19   0.46   153 1.00
## sigma   1.29    0.01 0.12   1.14   1.28   1.44   336 1.00
## lp__  -59.53    0.11 1.45 -61.23 -59.12 -58.19   166 1.01
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:56:49 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1.2)
tmp_mean=lapply(tmp,mean)
store_results["d1.2.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.2.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
mcmc_pairs(as.array(fit.m4.2_1.2), np = nuts_params(fit.m4.2_1.2),
           pars = c("r","K","sigma"), 
           off_diag_args = list(size = 1,alpha=1),
           np_style = pairs_style_np(div_size=2, div_shape = 19))

Here, the \((r,K)\) fit is degenerate: both \(r\) and \(K\) collapse towards zero, far from the simulated values (\(r=2\), \(K=20\)), so the estimated density-dependence is unreliable. A likely culprit is the prior K ~ exponential(10), which puts essentially no mass near the true \(K\); this could be corrected in the model definition in Stan.

fit.m4.2_0.3 <- sampling(m4.2, data = m40.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: There were 11 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_0.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##          mean se_mean   sd     10%     50%     90% n_eff Rhat
## r        0.01    0.00 0.01    0.00    0.01    0.03   213 1.01
## K        0.24    0.01 0.16    0.09    0.20    0.43   213 1.01
## sigma    2.29    0.01 0.18    2.07    2.28    2.50   337 1.00
## lp__  -101.80    0.09 1.40 -103.35 -101.43 -100.59   221 1.00
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:56:53 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_0.3)
tmp_mean=lapply(tmp,mean)
store_results["d0.3.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d0.3.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
fit.m4.2_1.3 <- sampling(m4.2, data = m41.3.data, iter = 1000, chains = 2, cores = 2)# control=list(adapt_delta=0.999))
## Warning: There were 18 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
## http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
## Warning: Examine the pairs() plot to diagnose sampling problems
## Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#bulk-ess
## Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
## Running the chains for more iterations may help. See
## http://mc-stan.org/misc/warnings.html#tail-ess
print(fit.m4.2_1.3, probs = c(0.10, 0.5, 0.9))
## Inference for Stan model: 5a046b4880bf26dd6f4c2e49cb70acd8.
## 2 chains, each with iter=1000; warmup=500; thin=1; 
## post-warmup draws per chain=500, total post-warmup draws=1000.
## 
##          mean se_mean   sd     10%     50%     90% n_eff Rhat
## r        0.02    0.00 0.01    0.01    0.02    0.04   134 1.01
## K        0.27    0.01 0.16    0.11    0.22    0.49   133 1.01
## sigma    2.62    0.02 0.22    2.36    2.61    2.90   162 1.03
## lp__  -114.28    0.12 1.44 -116.41 -113.87 -112.87   149 1.02
## 
## Samples were drawn using NUTS(diag_e) at Mon May  4 08:56:57 2020.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at 
## convergence, Rhat=1).
tmp=extract(fit.m4.2_1.3)
tmp_mean=lapply(tmp,mean)
store_results["d1.3.m2",c("r","alpha","sigma","likelihood")]=c(tmp_mean$r,tmp_mean$r/tmp_mean$K,tmp_mean$sigma,tmp_mean$lp__)
store_results["d1.3.m2","cor(r,DD)"]=cor(tmp$r,tmp$K)
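
With all the Ricker fits collected, the results table filled in above can be printed and compared to the simulated truth, \(r \in \{0.1, 2, 3.5\}\) and \(\alpha = 0.1\):

```r
print(store_results)
round(store_results[, "alpha"] - 0.1, 3)  # deviation of each density-dependence estimate from the true alpha
```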

[theta-Ricker for the next session? ample material for a session 2…]

Some literature on more refined models

A nice real-life example, including more complex models such as models with delays: https://onlinelibrary.wiley.com/doi/full/10.1111/j.1600-0587.2009.05604.x

See also the very smart https://esajournals.onlinelibrary.wiley.com/doi/pdf/10.1002/ecs2.2215